deletion capacity
- North America > United States > California (0.04)
- North America > Canada (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (0.67)
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > California (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Law > Statutes (0.93)
- Government (0.67)
Fully Decentralized Certified Unlearning
Hithem Lamri, Michail Maniatakos
Machine unlearning (MU) seeks to remove the influence of specified data from a trained model in response to privacy requests or data poisoning. While certified unlearning has been analyzed in centralized and server-orchestrated federated settings (via guarantees analogous to differential privacy, DP), the decentralized setting -- where peers communicate without a coordinator -- remains underexplored. We study certified unlearning in decentralized networks with fixed topologies and propose RR-DU, a random-walk procedure that performs one projected gradient ascent step on the forget set at the unlearning client and a geometrically distributed number of projected descent steps on the retained data elsewhere, combined with subsampled Gaussian noise and projection onto a trust region around the original model. We provide (i) convergence guarantees in the convex case and stationarity guarantees in the nonconvex case, (ii) $(\varepsilon,\delta)$ network-unlearning certificates on client views via subsampled Gaussian Rényi DP (RDP) with segment-level subsampling, and (iii) deletion-capacity bounds that scale with the forget-to-local data ratio and quantify the effect of decentralization (network mixing and randomized subsampling) on the privacy-utility trade-off. Empirically, on image benchmarks (MNIST, CIFAR-10), RR-DU matches a given $(\varepsilon,\delta)$ while achieving higher test accuracy than decentralized DP baselines and reducing forget accuracy to random guessing ($\approx 10\%$).
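The abstract's description of RR-DU can be sketched as a single-machine toy. This is an illustrative reconstruction under simplifying assumptions, not the paper's implementation: the random walk is replaced by uniform client sampling, the trust region is a Euclidean ball, and all names (`rr_du`, `project_ball`, `eta`, `sigma`, `radius`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(w, center, radius):
    """Project w onto the L2 ball of the given radius around center (the trust region)."""
    d = w - center
    n = np.linalg.norm(d)
    return w if n <= radius else center + d * (radius / n)

def rr_du(w0, grad_forget, grad_retain_per_client, eta=0.1, p=0.5,
          sigma=0.05, radius=1.0):
    """Toy RR-DU pass: one projected gradient *ascent* step on the forget set,
    a Geometric(p)-distributed number of projected descent steps on retained
    data at sampled clients, Gaussian noise, and a final trust-region
    projection around the original model w0. Illustrative only."""
    w = project_ball(w0 + eta * grad_forget(w0), w0, radius)  # ascent on forget set
    for _ in range(rng.geometric(p)):                          # K ~ Geometric(p) steps
        client = rng.integers(len(grad_retain_per_client))     # simplified random-walk hop
        w = project_ball(w - eta * grad_retain_per_client[client](w), w0, radius)
    w = w + rng.normal(0.0, sigma, size=w.shape)               # noise for the DP certificate
    return project_ball(w, w0, radius)
```

Note the order of operations matters for the certificate: noise is added before the final projection, so the returned model always stays within the trust region around the original weights.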
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.40)
- Europe > Austria > Vienna (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- (6 more...)
- Information Technology > Security & Privacy (1.00)
- Law (0.93)
- Government > Regional Government (0.67)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California (0.04)
- North America > Canada (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
Mo' Memory, Mo' Problems: Stream-Native Machine Unlearning
Machine unlearning work assumes a static, i.i.d. training environment that rarely exists in practice. Modern ML pipelines must learn, unlearn, and predict continuously on production data streams. We translate batch unlearning to the online setting using notions of regret, sample complexity, and deletion capacity. We tighten regret bounds to a logarithmic $\mathcal{O}(\ln{T})$, a first for a certified unlearning algorithm. When fitted with an online variant of L-BFGS optimization, the algorithm achieves state-of-the-art regret with a constant memory footprint. These changes extend the lifespan of an ML model before expensive retraining is needed, yielding a more efficient unlearning process.
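The stream-native setting above can be illustrated with a toy loop that interleaves predict, learn, and unlearn on scalar quadratic losses, tracking regret against the best fixed model in hindsight with constant memory. This is a hedged sketch of the general setting, not the paper's algorithm; the 1/t step size is the standard choice giving logarithmic regret under strong convexity, and the one-step "unlearn" is a deliberately crude stand-in for a certified deletion procedure.

```python
import numpy as np

def stream_unlearn_demo(T=200, seed=0):
    """Toy stream loop on losses l_t(w) = 0.5*(w - x_t)^2 using online
    gradient descent with a 1/t step size. Deletion requests are simulated
    by one gradient-ascent step on the deleted point's loss (illustrative
    only). Returns regret against the best fixed w in hindsight, computed
    from constant-memory running statistics."""
    rng = np.random.default_rng(seed)
    w = 0.0
    n = s = ss = 0.0                      # running count, sum, sum of squares
    cum_loss = 0.0
    for t in range(1, T + 1):
        x = rng.normal()                  # next stream element
        cum_loss += 0.5 * (w - x) ** 2    # suffer loss before updating
        w -= (1.0 / t) * (w - x)          # OGD step at rate 1/t
        n += 1; s += x; ss += x * x
        if rng.random() < 0.05:           # occasional deletion request
            w += (1.0 / t) * (w - x)      # crude "unlearn": one ascent step
    best = 0.5 * (ss - s * s / n)         # comparator loss, minimized at the mean
    return cum_loss - best
```

The comparator needs only three running scalars, so memory stays constant regardless of stream length, mirroring the constant-footprint property claimed for the online L-BFGS variant.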
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- North America > United States > New York (0.04)
- North America > United States > Michigan > Wayne County > Detroit (0.04)
- (4 more...)
- Information Technology > Security & Privacy (0.67)
- Education (0.66)